21 research outputs found

    Convolutional Neural Network for Material Decomposition in Spectral CT Scans

    Spectral computed tomography acquires energy-resolved data that allow recovery of the densities of the constituents of an object. This can be achieved by decomposing the measured spectral projections into material projections and passing these decomposed projections through a tomographic reconstruction algorithm to obtain the volumetric mass density of each material. Material decomposition is a nonlinear inverse problem that has traditionally been solved using model-based material decomposition algorithms. However, the forward model is difficult to estimate in real prototypes. Moreover, the traditional regularizers used to stabilize inversions are not fully relevant in the projection domain. In this study, we propose a deep-learning method for material decomposition in the projection domain. We validate our methodology on numerical phantoms of human knees created from synchrotron CT scans. We consider four different scans for training and one for validation. The measurements are corrupted by Poisson noise, assuming that at most 10^5 photons hit the detector. Compared to a regularized Gauss-Newton algorithm, the proposed deep-learning approach offers a good compromise between noise and resolution while reducing the computation time by a factor of 100.
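    As a rough illustration of the simulation setup described above, the sketch below generates Poisson-corrupted photon counts from a Beer-Lambert spectral forward model. The function name, array shapes, and material coefficients are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spectral_forward(a, mu, spectrum, n0=1e5):
    """Simulate photon counts for one detector pixel (illustrative sketch).

    a        : (M,) material projections (line integrals of density)
    mu       : (M, E) mass attenuation coefficient per material and energy bin
    spectrum : (E,) normalized effective spectrum (source x detector response)
    n0       : photons hitting the detector (the paper assumes at most 10^5)
    """
    attenuation = np.exp(-a @ mu)            # (E,) transmitted fraction per bin
    expected = n0 * spectrum * attenuation   # expected counts per energy bin
    return np.random.poisson(expected)       # Poisson-corrupted measurement

# Example: two materials (soft tissue, bone) and three energy bins;
# all numbers below are placeholders for real attenuation data.
mu = np.array([[0.20, 0.18, 0.16],
               [0.50, 0.35, 0.25]])
spectrum = np.array([0.3, 0.4, 0.3])
counts = spectral_forward(np.array([2.0, 1.0]), mu, spectrum)
```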

    Material Decomposition in Spectral CT using deep learning: A Sim2Real transfer approach

    The state of the art for solving the nonlinear material decomposition problem in spectral computed tomography is based on variational methods, but these are computationally slow and depend critically on the particular choice of the regularization functional. Convolutional neural networks have been proposed to address these issues; however, learning algorithms require large amounts of experimental data. We propose a deep-learning strategy for solving the material decomposition problem based on a U-Net architecture and a Sim2Real transfer learning approach, in which the knowledge learned from synthetic data is transferred to a real-world scenario. For this approach to work, the synthetic data must be realistic and representative of the experimental data. For this purpose, numerical phantoms are generated from human CT volumes of the KiTS19 Challenge dataset, segmented into specific materials (soft tissue and bone). These volumes are projected into sinogram space to simulate photon-counting data, taking into account the energy response of the scanner. We compare projection- and image-based decomposition approaches, in which the network is trained to decompose the materials either in the projection or in the image domain. The proposed Sim2Real transfer strategies are compared to a regularized Gauss-Newton (RGN) method on synthetic data, experimental phantom data, and human thorax data.
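    The projection- versus image-based distinction above amounts to two pipeline orderings: decompose first and then reconstruct, or reconstruct first and then decompose. The sketch below makes this concrete, assuming generic `unet` and `fbp` callables; it is a schematic, not the authors' code.

```python
def projection_based(spectral_sinograms, unet, fbp):
    # Decompose in the projection domain, then reconstruct each material.
    material_sinograms = unet(spectral_sinograms)       # M material sinograms
    return [fbp(s) for s in material_sinograms]         # M material images

def image_based(spectral_sinograms, unet, fbp):
    # Reconstruct each energy bin first, then decompose in the image domain.
    energy_images = [fbp(s) for s in spectral_sinograms]  # E reconstructions
    return unet(energy_images)                            # M material images
```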

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the results of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.

    Human Knee Phantom for Spectral CT: Validation of a Material Decomposition Algorithm

    Osteoarthritis is the most common degenerative joint disease. Spectral computed tomography generates energy-resolved data which enable identification of materials within the sample and offer improved soft-tissue contrast compared to conventional X-ray CT. In this work, we propose a realistic numerical phantom of a knee to assess the feasibility of spectral CT for osteoarthritis. The phantom is created from experimental synchrotron CT mono-energetic images. After simulating spectral CT data, we perform material decomposition using a Gauss-Newton method for different noise levels. Then, we reconstruct virtual mono-energetic images. We compare the decompositions and mono-energetic images with the phantom using the mean-squared error. When performing material decomposition and tomographic reconstruction, we obtain less than 1% error for both, using noisy data. Moreover, cartilage is visible to the naked eye in the virtual mono-energetic images. This phantom has great potential to assess the feasibility and current limitations of spectral CT for characterizing knee osteoarthritis.
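    For context, a regularized Gauss-Newton update for material decomposition at a single detector pixel typically takes the following form. The `forward` and `jacobian` callables and the Tikhonov weight are illustrative assumptions, not the exact algorithm used in this work.

```python
import numpy as np

def gauss_newton_step(a, s, forward, jacobian, lam=1e-3):
    """One update of the material projections a given measured counts s.

    forward  : callable, a -> (E,) expected counts per energy bin
    jacobian : callable, a -> (E, M) sensitivity of counts to a
    lam      : Tikhonov regularization weight (assumed value)
    """
    r = s - forward(a)                       # residual in measurement space
    J = jacobian(a)                          # linearize the forward model at a
    H = J.T @ J + lam * np.eye(a.size)       # Gauss-Newton Hessian + Tikhonov
    return a + np.linalg.solve(H, J.T @ r)   # regularized update
```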

    A residual U-Net network with image prior for 3D image denoising

    Denoising algorithms based on sparse representations are among the state of the art for image restoration. In previous work, we proposed SPADE, a sparsity- and prior-based method for 3D image denoising. In this work, we extend this idea to learning approaches and propose a novel residual U-Net prior-based (ResPrU-Net) method that exploits a prior image. The proposed ResPrU-Net architecture has two inputs, the noisy image and the prior image, and a residual connection that connects the prior image to the output of the network. We compare ResPrU-Net to U-Net and SPADE on human knee data acquired on a spectral computed tomography scanner. The prior image is built from the noisy image by combining information from neighboring slices, and it is the same for both SPADE and ResPrU-Net. For the deep-learning approaches, we use four knee samples and data augmentation for training, one knee for validation, and two for testing. The results show that for high noise, U-Net leads to the worst results, with images that are excessively blurred. The prior-based methods, SPADE and ResPrU-Net, outperform U-Net, yielding restored images with image quality similar to the target, and ResPrU-Net performs slightly better than SPADE. For low noise, all methods give similar results.
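    A minimal sketch of the ResPrU-Net idea as described above: two inputs (the noisy image and the prior image) stacked as channels, and a residual connection adding the prior to the network output. The backbone below is a trivial stand-in, not the authors' actual U-Net.

```python
import torch
import torch.nn as nn

class ResPrUNet(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any 2-channel-in / 1-channel-out network

    def forward(self, noisy, prior):
        x = torch.cat([noisy, prior], dim=1)  # stack the two inputs as channels
        return self.backbone(x) + prior       # residual connection to the prior

# Example with a single conv layer standing in for the U-Net backbone:
backbone = nn.Conv2d(2, 1, kernel_size=3, padding=1)
model = ResPrUNet(backbone)
out = model(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```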

    Material Decomposition Problem in Spectral CT: A Transfer Deep Learning Approach

    Current model-based variational methods for solving the nonlinear material decomposition problem in spectral computed tomography rely on prior knowledge of the scanner energy response, but this is generally unknown or spatially varying. We propose a two-step deep transfer learning approach that can learn the energy response of the scanner and its variation across the detector pixels. First, we pretrain a U-Net on a large data set assuming ideal data, and, second, we fine-tune the pretrained model using a small amount of data corresponding to a non-ideal scenario. We assess the approach on numerical thorax phantoms comprising soft tissue, bone, and kidneys marked with gadolinium, which are built from the KiTS19 dataset. We find that the proposed method solves the material decomposition problem without prior knowledge of the scanner energy response. We compare our approach to a regularized Gauss-Newton method and obtain superior image quality.
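    A minimal sketch of the two-step transfer strategy described above. The tiny stand-in network, the random tensors, and all hyperparameters are placeholder assumptions, not the authors' U-Net or training configuration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Conv2d(3, 2, kernel_size=3, padding=1)  # 3 energy bins -> 2 materials

def train(loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for spectral, materials in loader:
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(spectral), materials)
            loss.backward()
            opt.step()

# Random stand-ins: a large "ideal" set and a few "non-ideal" samples.
ideal = TensorDataset(torch.randn(64, 3, 32, 32), torch.randn(64, 2, 32, 32))
nonideal = TensorDataset(torch.randn(8, 3, 32, 32), torch.randn(8, 2, 32, 32))

# Step 1: pretrain on the large ideal-data set (known, uniform energy response).
train(DataLoader(ideal, batch_size=8, shuffle=True), lr=1e-3, epochs=5)
# Step 2: fine-tune with a lower learning rate on few non-ideal samples so the
# network absorbs the scanner's unknown, pixel-dependent energy response.
train(DataLoader(nonideal, batch_size=4, shuffle=True), lr=1e-4, epochs=5)
```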
